
    The Hidden Energy Cost of Web Advertising

    Advertising is an important source of income for many websites. To get the attention of unsuspecting (and probably uninterested) visitors, web advertisements (ads) tend to use elaborate animations and graphics. Depending on the specific technology being used, displaying such ads on the visitor's screen may require considerable CPU power. Since present-day desktop CPUs can easily draw 100 W or more, ads may consume a substantial amount of energy. Since reducing energy consumption is important for environmental reasons, increasing the number of ads seems counterproductive.

    The goal of this paper is to investigate the power consumption of web advertisements. For this purpose we used an energy meter to measure the differences in PC power consumption while browsing the web normally (thus with ads enabled) and while browsing with ads blocked.

    To simulate normal web browsing, we created a browser-based tool called AutoBrowse, which periodically opens a URL from a predefined list. To block advertisements, we used the Adblock Plus extension for Mozilla Firefox. To measure power consumption with other browsers as well, we additionally used the Apache HTTP server and its mod_proxy module as an ad-blocking proxy server.

    The measurements on several PCs and browsers show that, on average, displaying web advertisements draws an additional 2.5 W. To put this number into perspective, we calculated that the total amount of energy used to display web advertisements is equivalent to the total yearly electricity consumption of nearly 2000 households in the Netherlands. It takes 3.6 "average" wind turbines to generate this amount of energy.
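
    The extrapolation from the measured 2.5 W to a nationwide total can be sketched as a back-of-the-envelope calculation. All figures below except the 2.5 W are illustrative assumptions (number of users, daily browsing time, household consumption), not numbers taken from the paper:

```python
# Back-of-the-envelope estimate of nationwide ad energy use.
# Only AD_POWER_W comes from the measurements; the rest are assumptions.

AD_POWER_W = 2.5            # measured extra power per PC with ads enabled
USERS = 8_000_000           # assumed number of active internet users
HOURS_PER_DAY = 1.0         # assumed daily browsing time per user
HOUSEHOLD_KWH_YEAR = 3500   # assumed yearly consumption of a Dutch household

def ad_energy_kwh_per_year(power_w=AD_POWER_W, users=USERS,
                           hours_per_day=HOURS_PER_DAY):
    """Yearly energy (kWh) spent displaying ads across all users."""
    return power_w * users * hours_per_day * 365 / 1000

def equivalent_households(kwh_year, household_kwh=HOUSEHOLD_KWH_YEAR):
    """Number of average households that consume this much per year."""
    return kwh_year / household_kwh

total = ad_energy_kwh_per_year()
print(f"{total:,.0f} kWh/year ~ {equivalent_households(total):,.0f} households")
```

    With these assumed inputs the estimate lands in the same ballpark as the paper's figure of nearly 2000 households; the result scales linearly with each assumption.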

    Scalable Detection and Isolation of Phishing

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web servers. In addition, we propose to develop a self-management architecture that enables ISPs to protect their users against phishing attacks, and we explain how this architecture could be evaluated. This proposal is the result of half a year of research work at the University of Twente (UT) and is aimed at a Ph.D. thesis in 2012.
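
    The bad neighborhood concept can be illustrated with a minimal sketch: score each /24 subnet by the number of distinct hosts reported as phishing sources, and flag the whole prefix once a threshold is crossed. The threshold, the /24 prefix length, and the example addresses below are illustrative assumptions, not details from the proposal:

```python
from collections import defaultdict
from ipaddress import ip_network, ip_address

def bad_neighborhoods(phishing_hosts, threshold=3):
    """Group reported phishing hosts by /24 prefix and flag dense ones.

    phishing_hosts: iterable of IPv4 address strings reported as phishing
    sources. Returns the set of /24 networks containing at least
    `threshold` distinct offending hosts ("bad neighborhoods").
    """
    per_subnet = defaultdict(set)
    for host in phishing_hosts:
        subnet = ip_network(f"{host}/24", strict=False)
        per_subnet[subnet].add(ip_address(host))
    return {net for net, hosts in per_subnet.items()
            if len(hosts) >= threshold}

# Example: three offenders cluster in 192.0.2.0/24, one host elsewhere.
reports = ["192.0.2.10", "192.0.2.45", "192.0.2.200", "198.51.100.7"]
print(bad_neighborhoods(reports))
```

    An ISP could then apply stricter filtering to traffic from flagged prefixes rather than tracking individual hosts, which is what makes the approach scale.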

    Remote MIB-item look-up service

    Despite some deficiencies, the Internet management framework is widely deployed and thousands of management information base (MIB) modules have been defined thus far. These modules are used by implementers of agent software, as well as by managers and management applications, to understand the syntax and semantics of the management information that may be exchanged. At the manager's side, MIB modules are usually stored in separate files, which are maintained by the human manager and read by the management application. Since maintenance of this file repository can be cumbersome, management applications are often confronted with incomplete and outdated information. To solve this "meta-management" problem, this paper discusses the design of a remote look-up service for MIB-item definitions. Such a service facilitates the retrieval of missing MIB module definitions, as well as definitions of individual MIB-items. Initially the service may be provided by a single server, but other servers can be added at later stages to improve performance and prevent copyright problems. It is envisaged that vendors of network equipment will also install servers, to distribute their vendor-specific MIBs. The paper describes how the service, which is provided on a best-effort basis, can be accessed by managers and management applications, and how servers inform each other about the MIB modules they support.
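
    The look-up flow described above (local repository first, then a chain of best-effort remote servers) can be sketched as follows. The OIDs, definitions, and server modeling as simple callables are hypothetical; the paper defines the actual protocol:

```python
def lookup_mib_item(oid, local_repo, remote_servers):
    """Resolve a MIB-item definition: local files first, then remote servers.

    local_repo: dict mapping OID -> definition (parsed local MIB files).
    remote_servers: ordered list of callables oid -> definition or None,
    modeling best-effort remote look-up servers (e.g. a vendor server,
    then a generic one). Returns None if no server knows the item.
    """
    if oid in local_repo:
        return local_repo[oid]
    for query in remote_servers:
        definition = query(oid)
        if definition is not None:
            local_repo[oid] = definition  # cache to avoid repeated look-ups
            return definition
    return None

# Hypothetical servers: a vendor server and a generic fallback.
vendor = {"1.3.6.1.4.1.9.2.1.58": "avgBusy5: 5-minute CPU load average"}
generic = {"1.3.6.1.2.1.1.3": "sysUpTime: time since re-initialization"}
servers = [vendor.get, generic.get]

print(lookup_mib_item("1.3.6.1.2.1.1.3", {}, servers))
```

    Caching resolved definitions locally mirrors the paper's point that the service only needs to be consulted when the manager's own file repository is incomplete.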

    On the standardisation of Web service management operations

    Given the current interest in TCP/IP network management research towards Web services, it is important to recognise how standardisation can be achieved. This paper focuses mainly on the standardisation of operations, not management information. We argue that standardisation should be done by standardising the abstract parts of a WSDL document, i.e. the interfaces and the messages. Operations can vary in granularity and parameter transparency, creating four extreme operation signatures, all of which have advantages and disadvantages.
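
    The two dimensions (granularity, parameter transparency) can be illustrated with toy operation stubs. The operation names, the tiny in-memory MIB, and the XML document format are our own illustrations, not signatures from the paper:

```python
import xml.etree.ElementTree as ET

# Toy agent state standing in for real management information.
MIB = {"sysUpTime": "123456", "sysName": "router1"}

# Fine-grained, transparent: one dedicated, typed operation per item.
def get_sys_up_time() -> str:
    return MIB["sysUpTime"]

# Coarse-grained, transparent: one generic operation, typed parameters.
def get(object_name: str) -> str:
    return MIB[object_name]

# Coarse-grained, opaque: one generic operation that accepts and
# returns a document, leaving the item name inside the payload.
def invoke(request_xml: str) -> str:
    name = ET.fromstring(request_xml).findtext("object")
    return f"<value>{MIB[name]}</value>"

print(get_sys_up_time())
print(invoke("<get><object>sysName</object></get>"))
```

    The trade-off the paper discusses becomes visible here: the fine-grained transparent style is self-describing but needs one WSDL operation per item, while the opaque style keeps the interface stable at the cost of pushing all semantics into the document.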

    Using Self-management for Establishing Light Paths in Optical Networks: an Overview

    Current optical networks are generally composed of multi-service optical switches, which enable forwarding of data at multiple levels. Huge flows at the packet level (IP level) may be moved to the optical level, which is faster than the packet level. Such a move could be beneficial since congested IP networks could be off-loaded, leaving more resources for other IP flows. At the same time, the flows switched at the optical level would receive better Quality of Service (QoS). The transfer of those flows to the optical level requires the creation of dedicated light paths to carry them. Currently, two approaches are used for that purpose: the first is based on conventional management techniques and the second is based on GMPLS signalling. In both approaches, the decision as to which IP flows will be moved to light paths is taken by managers. Therefore, only IP flows explicitly selected by such managers will take advantage of being transferred over light path connections. However, there may also be other large IP flows, not known to the manager, that could potentially profit from being moved to the optical level. The idea proposed in this paper is therefore to add self-management capabilities to the multi-service optical switches, making them responsible for identifying which IP flows should be moved to the optical level and for establishing and releasing light path connections for such flows.
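
    The self-management loop described above can be sketched as threshold-based control: set up a light path when a flow's observed rate exceeds one threshold, and release it when the rate falls below another. The thresholds, flow identifiers, and the dictionary of flow statistics are illustrative assumptions, not values from the paper:

```python
# Sketch of one control iteration in a self-managing optical switch.
SETUP_BPS = 500_000_000   # assumed rate above which a light path pays off
RELEASE_BPS = 50_000_000  # assumed rate below which it is torn down

def manage_flows(flow_rates_bps, on_lightpath):
    """Decide which flows to move to or from the optical level.

    flow_rates_bps: dict flow_id -> observed rate (bits/s).
    on_lightpath: set of flow_ids currently switched at the optical
    level (updated in place). Returns (setup, release) for this round.
    """
    setup = {f for f, r in flow_rates_bps.items()
             if r >= SETUP_BPS and f not in on_lightpath}
    release = {f for f in on_lightpath
               if flow_rates_bps.get(f, 0) <= RELEASE_BPS}
    on_lightpath |= setup
    on_lightpath -= release
    return setup, release

active = set()
print(manage_flows({"flowA": 8e8, "flowB": 1e6}, active))
```

    Using separate setup and release thresholds (hysteresis) avoids repeatedly creating and tearing down a light path for a flow that hovers around a single threshold.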

    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and as such the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, European Union Future Internet Assembly (FIA). This is a position paper identifying the research orientation, with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI), allowing for parallel and federated Internet(s).

    Report on the Dagstuhl Seminar on Visualization and Monitoring of Network Traffic

    The Dagstuhl Seminar on Visualization and Monitoring of Network Traffic took place May 17-20, 2009 in Dagstuhl, Germany. Dagstuhl seminars promote personal interaction and open discussion of results as well as new ideas. Unlike at most conferences, the focus is not solely on the presentation of established results but equally on the presentation of ideas, sketches, and open problems. The aim of this particular seminar was to bring together experts from the information visualization community and the networking community in order to discuss the state of the art of monitoring and visualization of network traffic. People from the different research communities involved jointly organized the seminar. The co-chairs of the seminar from the networking community were Aiko Pras (University of Twente) and Jürgen Schönwälder (Jacobs University Bremen). The co-chairs from the visualization community were Daniel A. Keim (University of Konstanz) and Pak Chung Wong (Pacific Northwest National Laboratory). Florian Mansmann (University of Konstanz) helped with producing this report. The seminar was organized and supported by Schloss Dagstuhl and the European Network of Excellence for the Management of Internet Technologies and Complex Systems (EMANICS).

    Regulation of Age-Related Protein Toxicity

    Proteome damage plays a major role in aging and age-related neurodegenerative diseases. Under healthy conditions, molecular quality control mechanisms prevent toxic protein misfolding and aggregation. These mechanisms include molecular chaperones for protein folding, spatial compartmentalization for sequestration, and degradation pathways for the removal of harmful proteins. These mechanisms decline with age, resulting in the accumulation of aggregation-prone proteins that are harmful to cells. In the past decades, a variety of fast- and slow-aging model organisms have been used to investigate the biological mechanisms that accelerate or prevent such protein toxicity. In this review, we describe the most important mechanisms that are required for maintaining a healthy proteome. We describe how these mechanisms decline during aging and lead to toxic protein misassembly, aggregation, and amyloid formation. In addition, we discuss how optimized protein homeostasis mechanisms in long-living animals contribute to prolonging their lifespan. This knowledge might help us to develop interventions in the protein homeostasis network that delay aging and age-related pathologies.

    Network and service management
